Maximum Skew-Symmetric Flows and Matchings
The maximum integer skew-symmetric flow problem (MSFP) generalizes both the
maximum flow and maximum matching problems. It was introduced by Tutte in terms
of self-conjugate flows in antisymmetrical digraphs. He showed that for these
objects there are natural analogs of classical theoretical results on usual
network flows, such as the flow decomposition, augmenting path, and max-flow
min-cut theorems. We give unified and shorter proofs for those theoretical
results.
We then extend to MSFP the shortest augmenting path method of Edmonds and
Karp and the blocking flow method of Dinits, obtaining algorithms with similar
time bounds in the general case. Moreover, in the cases of unit arc capacities and
unit ``node capacities'' the blocking skew-symmetric flow algorithm has time
bounds similar to those established in Even and Tarjan (1975) and Karzanov
(1973) for Dinits' algorithm. In particular, this implies an algorithm for
finding a maximum matching in a nonbipartite graph in O(sqrt{n} m) time,
which matches the time bound for the algorithm of Micali and Vazirani. Finally,
extending a clique compression technique of Feder and Motwani to particular
skew-symmetric graphs, we speed up the implied maximum matching algorithm further, improving the best known bound for dense nonbipartite graphs.
Also other theoretical and algorithmic results on skew-symmetric flows and
their applications are presented.

Comment: 35 pages, 3 figures, to appear in Mathematical Programming; minor stylistic corrections and shortenings to the original version.
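For orientation, the classical augmenting-path scheme that the skew-symmetric theory generalizes can be sketched in a few lines. Below is a minimal Edmonds-Karp max-flow routine (shortest augmenting paths found by BFS) over an adjacency-matrix capacity graph; it illustrates only the standard flow setting, not the skew-symmetric extension developed in the paper.

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest residual path."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path left: flow is maximum
        # bottleneck capacity along the path found
        b, v = float('inf'), t
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        # augment: decrease forward capacities, increase reverse ones
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= b
            cap[v][u] += b
            v = u
        flow += b
```

Running it on a four-node network with two disjoint source-sink paths of capacity 2 each returns a maximum flow of 4.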
Faster Batched Shortest Paths in Road Networks
We study the problem of computing batched shortest paths in road networks efficiently. Our focus is on computing paths from a single source to multiple targets (one-to-many queries). We perform a comprehensive experimental comparison of several approaches, including new ones. We conclude that a new extension of PHAST (a recent one-to-all algorithm), called RPHAST, has the best performance in most cases, often by orders of magnitude. When used to compute distance tables (many-to-many queries), RPHAST often outperforms all previous approaches.
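As a point of reference, the baseline that batched approaches compete against is plain Dijkstra run from the source until every target is settled. The sketch below shows only that baseline; RPHAST itself relies on contraction-hierarchy preprocessing, which is not reproduced here.

```python
import heapq

def one_to_many(adj, source, targets):
    """Dijkstra that stops early once all targets are settled.
    A baseline only; RPHAST adds contraction-hierarchy preprocessing."""
    remaining = set(targets)
    dist = {source: 0}
    pq = [(0, source)]
    settled = set()
    while pq and remaining:
        d, u = heapq.heappop(pq)
        if u in settled:
            continue
        settled.add(u)
        remaining.discard(u)
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return {t: dist.get(t, float('inf')) for t in targets}
```

The early-exit condition (`remaining` empty) is what makes the one-to-many variant cheaper than a full one-to-all search when the targets are clustered near the source.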
An Online Decision-Theoretic Pipeline for Responder Dispatch
The problem of dispatching emergency responders to service traffic accidents,
fire, distress calls and crimes plagues urban areas across the globe. While
such problems have been studied extensively, most approaches are offline.
Such methodologies fail to capture the dynamically changing environments under
which critical emergency response occurs, and therefore, fail to be implemented
in practice. Any holistic approach towards creating a pipeline for effective
emergency response must also look at other challenges that it subsumes -
predicting when and where incidents happen and understanding the changing
environmental dynamics. We describe a system that collectively deals with all
these problems in an online manner, meaning that the models get updated with
streaming data sources. We highlight why such an approach is crucial to the
effectiveness of emergency response, and present an algorithmic framework that
can compute promising actions for a given decision-theoretic model for
responder dispatch. We argue that carefully crafted heuristic measures can
balance the trade-off between computational time and the quality of solutions
achieved and highlight why such an approach is more scalable and tractable than
traditional approaches. We also present an online mechanism for incident
prediction, as well as an approach based on recurrent neural networks for
learning and predicting environmental features that affect responder dispatch.
We compare our methodology with prior state-of-the-art and existing dispatch strategies in the field; the comparison shows that our approach reduces response time while drastically reducing computational time.

Comment: Appeared in ICCPS 2019.
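To make the trade-off concrete, here is a sketch of the kind of myopic baseline that dispatch policies are typically compared against: greedily send the nearest free responder to each incident as it arrives. The function name and data shapes are illustrative assumptions, not the paper's decision-theoretic framework.

```python
def greedy_dispatch(responders, incidents, travel_time):
    """Assign each incident (in arrival order) the free responder with
    the smallest travel time: the myopic baseline that online
    decision-theoretic dispatch aims to improve upon."""
    free = set(responders)
    assignment = {}
    for inc in incidents:
        if not free:
            break  # all responders busy; incident must wait
        best = min(free, key=lambda r: travel_time[r][inc])
        assignment[inc] = best
        free.remove(best)
    return assignment
```

The weakness of this policy, and the motivation for lookahead, is that serving the current incident with the closest unit can strand a later, more urgent incident far from any remaining responder.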
Minimum Cost Flows in Graphs with Unit Capacities
We consider the minimum cost flow problem on graphs with unit capacities and its special cases. In previous studies, special purpose algorithms exploiting the fact that capacities are one have been developed.
In contrast, for maximum flow with unit capacities, the best bounds are proven for slight modifications of classical blocking flow and push-relabel algorithms.
In this paper we show that the classical cost scaling algorithms of Goldberg and Tarjan (for general integer capacities) applied to a problem with unit capacities achieve or improve the best known bounds.
For weighted bipartite matching we establish a bound of O(sqrt{rm} log C) on a slight variation of this algorithm. Here r is the size of the smaller side of the bipartite graph, m is the number of edges, and C is the largest absolute value of an arc cost. This simplifies a result of [Duan et al. 2011] and improves the bound, answering an open question of [Tarjan and Ramshaw 2012]. For graphs with unit vertex capacities we establish a novel O(sqrt{n} m log(nC)) bound. We also give the first cycle canceling algorithm for minimum cost flow with unit capacities. The algorithm naturally generalizes the single source shortest path algorithm of [Goldberg 1995].
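For context, a textbook successive-shortest-path routine (not the cost-scaling algorithm analyzed above) solves small unit-capacity instances directly. The sketch below uses Bellman-Ford to find each cheapest augmenting path (reverse residual arcs can carry negative costs) and pushes one unit at a time, which matches the unit-capacity setting.

```python
def min_cost_flow(n, edges, s, t):
    """Successive shortest paths, pushing one unit per iteration.
    A textbook baseline for unit capacities, not cost scaling."""
    # residual arcs stored as [head, capacity, cost, index_of_reverse]
    graph = [[] for _ in range(n)]
    def add(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    for u, v, cap, cost in edges:
        add(u, v, cap, cost)
    total_cost = 0
    while True:
        # Bellman-Ford: cheapest path in the residual graph
        dist = [float('inf')] * n
        dist[s] = 0
        prev = [None] * n  # (predecessor node, arc index)
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float('inf'):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
        if dist[t] == float('inf'):
            return total_cost  # no augmenting path remains
        # push one unit of flow along the cheapest path
        v = t
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= 1
            graph[v][graph[u][i][3]][1] += 1
            v = u
        total_cost += dist[t]
```

On a tiny bipartite instance (two unit-cost-0 source arcs, two sink arcs of cost 2 and 1), the routine augments along the cost-1 path first, then the cost-2 path, for a total cost of 3.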
Derandomization of auctions
We study the role of randomization in seller optimal (i.e., profit maximization) auctions. Bayesian optimal auctions (e.g., Myerson, 1981) assume that the valuations of the agents are random draws from a distribution and prior-free optimal auctions either are randomized (e.g., Goldberg et al., 2006) or assume the valuations are randomized (e.g., Segal, 2003). Is randomization fundamental to profit maximization in auctions? Our main result is a general approach to derandomize single-item multi-unit unit-demand auctions while approximately preserving their performance (i.e., revenue). Our general technique is constructive but not computationally tractable. We complement the general result with the explicit and computationally-simple derandomization of a particular auction. Our results are obtained through analogy to hat puzzles that are interesting in their own right.
A Local Search Algorithm for Large Maximum Weight Independent Set Problems
Motivated by a real-world vehicle routing application, we consider the maximum-weight independent set problem: Given a node-weighted graph, find a set of independent (mutually nonadjacent) nodes whose node-weight sum is maximum. Some of the graphs arising in the vehicle routing application are large, having hundreds of thousands of nodes and hundreds of millions of edges.
To solve instances of this size, we develop a new local search algorithm, which is a metaheuristic based on the greedy randomized adaptive search (GRASP) framework. This algorithm, named METAMIS, uses a wider range of simple local search operations than previously described in the literature. We introduce data structures that make these operations efficient. A new variant of path-relinking is introduced to escape local optima and so is a new alternating augmenting-path local search move that improves algorithm performance.
We compare an implementation of our algorithm with a state-of-the-art publicly available code on public benchmark sets, including some large instances. Our algorithm is, in general, competitive and outperforms this openly available code on large vehicle routing instances of the maximum weight independent set problem. We hope that our results will lead to even better maximum-weight independent set algorithms.
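A minimal illustration of the GRASP idea (randomized greedy construction with a restricted candidate list, repeated while keeping the best solution) is sketched below. METAMIS layers much richer local-search moves and path-relinking on top of this skeleton, none of which appear here.

```python
import random

def grasp_mwis(nodes, weight, adj, iters=50, alpha=0.5, seed=0):
    """GRASP sketch for maximum-weight independent set: repeat a
    randomized greedy construction and keep the heaviest solution."""
    rng = random.Random(seed)
    best, best_w = set(), 0
    for _ in range(iters):
        cand = set(nodes)
        sol = set()
        while cand:
            ranked = sorted(cand, key=lambda v: -weight[v])
            # restricted candidate list: top alpha fraction by weight
            rcl = ranked[: max(1, int(alpha * len(ranked)))]
            v = rng.choice(rcl)
            sol.add(v)
            cand.discard(v)
            cand -= adj.get(v, set())  # neighbors become ineligible
        w = sum(weight[v] for v in sol)
        if w > best_w:
            best, best_w = sol, w
    return best, best_w
```

On the weighted path 1-2-3 with weights 2, 3, 2, the pure greedy choice (node 2, weight 3) is suboptimal; the randomized restarts let the construction find the heavier independent set {1, 3} of weight 4.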
Late Ediacaran Redox Stability and Metazoan Evolution
The Neoproterozoic arrival of animals fundamentally changed Earth's biological and geochemical trajectory. Since the early description of Ediacaran and Cambrian animal fossils, a vigorous debate has emerged about the drivers underpinning their seemingly rapid radiation. Some argue for predation and ecology as central to diversification, whereas others point to a changing chemical environment as the trigger. In both cases, questions of timing and feedbacks remain unresolved. Through these debates, the last fifty years of work has largely converged on the concept that a change in atmospheric oxygen levels, perhaps manifested indirectly as an oxygenation of the deep ocean, was causally linked to the initial diversification of large animals. What has largely been absent, but is provided in this study, is a multi-proxy stratigraphic test of this hypothesis. Here, we describe a coupled geochemical and paleontological investigation of Neoproterozoic sedimentary rocks from northern Russia. In detail, we provide iron speciation data, carbon and sulfur isotope compositions, and major element abundances from a predominantly siliciclastic succession (spanning >1000 m) sampled by the Kel'tminskaya-1 drillcore. Our interpretation of these data is consistent with the hypothesis that the threshold required for diversification of animals with high metabolic oxygen demands was crossed prior to or during the Ediacaran Period. Redox stabilization of shallow marine environments was, however, also critical and only occurred about 560 million years ago (Ma), when large motile bilaterians first enter the regional stratigraphic record. In contrast, neither fossils nor geochemistry lend support to the hypothesis that ecological interactions altered the course of evolution in the absence of environmental change.
Together, the geochemical and paleontological records suggest a coordinated transition from low oxygen oceans sometime before the Marinoan (~635 Ma) ice age, through better oxygenated but still redox-unstable shelves of the early Ediacaran Period, to the fully and persistently oxygenated marine environments characteristic of later Ediacaran successions that preserve the first bilaterian macrofossils and trace fossils.
Predictions for the Cosmogenic Neutrino Flux in Light of New Data from the Pierre Auger Observatory
The Pierre Auger Observatory (PAO) has measured the spectrum and composition
of the ultrahigh energy cosmic rays with unprecedented precision. We use these
measurements to constrain their spectrum and composition as injected from their
sources and, in turn, use these results to estimate the spectrum of cosmogenic
neutrinos generated in their propagation through intergalactic space. We find
that the PAO measurements can be well fit if the injected cosmic rays consist
entirely of nuclei with masses in the intermediate (C, N, O) to heavy (Fe, Si)
range. A mixture of protons and heavier species is also acceptable but (on the
basis of existing hadronic interaction models) injection of pure light nuclei
(p, He) results in unacceptable fits to the new elongation rate data. The
expected spectrum of cosmogenic neutrinos can vary considerably, depending on
the precise spectrum and chemical composition injected from the cosmic ray
sources. In the models where heavy nuclei dominate the cosmic ray spectrum and
few dissociated protons exceed GZK energies, the cosmogenic neutrino flux can
be suppressed by up to two orders of magnitude relative to the all-proton
prediction, making its detection beyond the reach of current and planned
neutrino telescopes. Other models consistent with the data, however, are
proton-dominated with only a small (1-10%) admixture of heavy nuclei and
predict an associated cosmogenic flux within the reach of upcoming experiments.
Thus a detection or non-detection of cosmogenic neutrinos can assist in discriminating between these possibilities.

Comment: 10 pages, 7 figures.
Size-Change Termination, Monotonicity Constraints and Ranking Functions
Size-Change Termination (SCT) is a method of proving program termination
based on the impossibility of infinite descent. To this end we may use a
program abstraction in which transitions are described by monotonicity
constraints over (abstract) variables. When only constraints of the form x>y'
and x>=y' are allowed, we have size-change graphs. Both theory and practice are
now more evolved in this restricted framework than in the general framework of
monotonicity constraints. This paper shows that it is possible to extend and
adapt some theory from the domain of size-change graphs to the general case,
thus complementing previous work on monotonicity constraints. In particular, we
present precise decision procedures for termination; and we provide a procedure
to construct explicit global ranking functions from monotonicity constraints in
singly-exponential time, which is better than what has been published so far
even for size-change graphs.

Comment: revised version of September 2
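The size-change criterion itself is short enough to sketch: compose size-change graphs transitively and demand that every idempotent graph in the closure carries a strict self-arc x > x, witnessing infinite descent. The sketch below assumes the simplest setting of a single recursive function whose calls all share one call site; it illustrates the criterion only, not the paper's decision procedures for general monotonicity constraints.

```python
def compose(g1, g2):
    """Compose size-change graphs: arcs x -r1-> y and y -r2-> z yield
    x -r-> z, strict ('>') if either component arc is strict."""
    out = set()
    for (x, r1, y) in g1:
        for (y2, r2, z) in g2:
            if y == y2:
                r = '>' if '>' in (r1, r2) else '>='
                out.add((x, r, z))
    # keep only the strongest relation for each (source, target) pair
    strongest = {}
    for (x, r, z) in out:
        if strongest.get((x, z)) != '>':
            strongest[(x, z)] = r
    return frozenset((x, r, z) for (x, z), r in strongest.items())

def sct_terminates(graphs):
    """SCT criterion (single call site): every idempotent graph in the
    composition closure must contain a strict self-arc x > x."""
    closure = {frozenset(g) for g in graphs}
    while True:
        new = {compose(a, b) for a in closure for b in closure} - closure
        if not new:
            break
        closure |= new
    return all(any(x == z and r == '>' for (x, r, z) in g)
               for g in closure if compose(g, g) == g)
```

For example, a self-call that strictly decreases its argument passes the test, while one that only guarantees non-increase fails, since its idempotent graph has no strict self-arc.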